Robots working in unstructured environments must be able to perceive and interpret their surroundings. One of the main obstacles to deep-learning-based models in robotics is the lack of domain-specific labeled data for different industrial applications. In this paper, we propose a Sim2Real transfer learning method for object detection based on domain randomization, with which labeled synthetic datasets of arbitrary size and object type can be generated automatically. Subsequently, the state-of-the-art convolutional neural network YOLOv4 is trained to detect different types of industrial objects. With the proposed domain randomization method, we shrink the reality gap to a satisfactory level, achieving mAP50 scores of 86.32% for zero-shot and 97.38% for one-shot transfer, evaluated on 190 real images. On a GeForce RTX 2080 Ti GPU, data generation takes less than 0.5 s per image and training lasts about 12 h, which makes the method convenient for industrial use. Our solution matches industrial needs, as it can reliably differentiate similar object classes when trained on only one real image. To our knowledge, this is the only work to date that satisfies these constraints.
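To make the data-generation idea concrete, below is a minimal, self-contained sketch of domain-randomized synthetic image generation with automatic YOLO-format labels. It is a toy stand-in (random backgrounds and a bright rectangle as the "object"), not the pipeline used in the paper; all names and parameter ranges are illustrative.

```python
# Toy domain-randomization sketch: every image gets a random background,
# random global lighting gain, and a randomly placed/scaled "object"; the
# bounding-box label is written automatically in YOLO format
# (class cx cy w h, all normalized). Illustrative only.
import os
import random
import numpy as np
from PIL import Image

def synth_image(img_size=416):
    # Randomized nuisance factors: background texture and lighting.
    background = np.random.randint(0, 256, (img_size, img_size, 3), dtype=np.uint8)
    gain = random.uniform(0.4, 1.6)

    # Randomized object placement and scale.
    w = random.randint(img_size // 10, img_size // 3)
    h = random.randint(img_size // 10, img_size // 3)
    x0 = random.randint(0, img_size - w)
    y0 = random.randint(0, img_size - h)
    color = np.array([random.randint(128, 255) for _ in range(3)], dtype=np.float32)

    img = background.astype(np.float32) * gain
    img[y0:y0 + h, x0:x0 + w] = color
    img = np.clip(img, 0, 255).astype(np.uint8)

    # YOLO label: normalized center x/y, width, height.
    label = ((x0 + w / 2) / img_size, (y0 + h / 2) / img_size,
             w / img_size, h / img_size)
    return img, label

def generate(n, out_dir="synthetic", class_id=0):
    os.makedirs(out_dir, exist_ok=True)
    for i in range(n):
        img, (cx, cy, bw, bh) = synth_image()
        Image.fromarray(img).save(f"{out_dir}/{i:06d}.png")
        with open(f"{out_dir}/{i:06d}.txt", "w") as f:
            f.write(f"{class_id} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")

if __name__ == "__main__":
    generate(10)
```

The resulting image/label pairs have exactly the layout expected by common YOLO training scripts, which is why automatic labeling is essentially free once the randomized scene parameters are known.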
Cell-free multi-user multiple input multiple output networks are a promising alternative to classical cellular architectures, since they have the potential to provide uniform service quality and high resource utilisation over the entire coverage area of the network. To realise this potential, previous works have developed radio resource management mechanisms using various optimisation engines. In this work, we consider the problem of overall ergodic spectral efficiency maximisation in the context of uplink-downlink data power control in cell-free networks. To solve this problem in large networks, and to address convergence-time limitations, we apply scalable multi-objective Bayesian optimisation. Furthermore, we discuss how an intersection of multi-fidelity emulation and Bayesian optimisation can improve radio resource management in cell-free networks.
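As an illustration of the optimisation machinery mentioned above, here is a minimal single-objective Bayesian-optimisation loop over per-user transmit powers using a Gaussian-process surrogate with an expected-improvement acquisition. The spectral-efficiency function is a synthetic placeholder rather than a cell-free channel model, and the multi-objective and multi-fidelity aspects discussed in the paper are not reproduced.

```python
# Minimal Bayesian optimisation sketch for power control (toy objective).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
N_USERS = 4  # one normalized power variable per user, each in [0, 1]

def spectral_efficiency(p):
    """Placeholder objective: rewards high power, penalizes mutual interference."""
    signal = np.log2(1.0 + 5.0 * p)
    interference = 0.5 * (np.sum(p) - p)
    return float(np.sum(signal / (1.0 + interference)))

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial random design, then iterate: fit surrogate, pick the candidate with
# the largest expected improvement, evaluate it, repeat.
X = rng.uniform(0, 1, size=(8, N_USERS))
y = np.array([spectral_efficiency(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(30):
    gp.fit(X, y)
    candidates = rng.uniform(0, 1, size=(512, N_USERS))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, spectral_efficiency(x_next))

print("best power allocation:", X[np.argmax(y)], "objective:", y.max())
```

Candidate sampling here is plain random search over the acquisition function; in a real radio-resource-management setting the candidate set and the objective evaluation would come from the network model.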
Data-driven interatomic potentials have emerged as a powerful class of surrogate models for ab initio potential energy surfaces that are able to reliably predict macroscopic properties with experimental accuracy. In generating accurate and transferable potentials the most time-consuming and arguably most important task is generating the training set, which still requires significant expert user input. To accelerate this process, this work presents hyperactive learning (HAL), a framework for formulating an accelerated sampling algorithm specifically for the task of training database generation. The key idea is to start from a physically motivated sampler (e.g., molecular dynamics) and add a biasing term that drives the system towards high uncertainty and thus to unseen training configurations. Building on this framework, general protocols for building training databases for alloys and polymers leveraging the HAL framework will be presented. For alloys, ACE potentials for AlSi10 are created by fitting to a minimal HAL-generated database containing 88 configurations (32 atoms each) with fast evaluation times of <100 microsecond/atom/cpu-core. These potentials are demonstrated to predict the melting temperature with excellent accuracy. For polymers, a HAL database is built using ACE, able to determine the density of a long polyethylene glycol (PEG) polymer formed of 200 monomer units with experimental accuracy by only fitting to small isolated PEG polymers with sizes ranging from 2 to 32.
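A schematic of the biasing idea, under the assumption that uncertainty comes from a committee of surrogate models: dynamics are run on a biased energy E(x) - tau * sigma(x), and configurations are harvested once sigma exceeds a threshold. The toy 1D system, the committee construction, and all thresholds below are illustrative, not the ACE/HAL implementation.

```python
# Hyperactive-learning-style biased sampling on a toy 1D energy surface.
import numpy as np

rng = np.random.default_rng(1)

def true_energy(x):
    return (x**2 - 1.0) ** 2          # double well, minima at x = +/- 1

# Committee of surrogates: they agree near x ~ 1 (the "training region") and
# disagree increasingly far away from it. Each member gets its own random
# perturbation coefficient, standing in for a fitted committee.
committee = [lambda x, a=rng.normal(0.0, 1.0): (x**2 - 1.0) ** 2 + a * (x - 1.0) ** 2
             for _ in range(8)]

def committee_mean_std(x):
    preds = np.array([m(x) for m in committee])
    return preds.mean(), preds.std()

def biased_force(x, tau, eps=1e-4):
    """Numerical gradient of the biased energy E_mean(x) - tau * sigma(x)."""
    def e_biased(z):
        mu, sigma = committee_mean_std(z)
        return mu - tau * sigma
    return -(e_biased(x + eps) - e_biased(x - eps)) / (2 * eps)

# Overdamped Langevin dynamics on the biased surface; the bias pushes the
# walker out of the well-trained region until the uncertainty threshold trips.
x, tau, dt, kT = 1.0, 5.0, 1e-3, 0.1
harvested = []
for step in range(100_000):
    x += dt * biased_force(x, tau) + np.sqrt(2 * kT * dt) * rng.normal()
    _, sigma = committee_mean_std(x)
    if sigma > 0.2:                   # high-uncertainty configuration found
        harvested.append(x)
        break

print("harvested configurations:", harvested)
```

In an actual HAL loop the harvested configuration would be labelled with an ab initio calculation, added to the database, and the potential refitted before sampling continues.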
Density based representations of atomic environments that are invariant under Euclidean symmetries have become a widely used tool in the machine learning of interatomic potentials, broader data-driven atomistic modelling and the visualisation and analysis of materials datasets. The standard mechanism used to incorporate chemical element information is to create separate densities for each element and form tensor products between them. This leads to a steep scaling in the size of the representation as the number of elements increases. Graph neural networks, which do not explicitly use density representations, escape this scaling by mapping the chemical element information into a fixed dimensional space in a learnable way. We recast this approach as tensor factorisation by exploiting the tensor structure of standard neighbour density based descriptors. In doing so, we form compact tensor-reduced representations whose size does not depend on the number of chemical elements, but which remain systematically convergeable and are therefore applicable to a wide range of data analysis and regression tasks.
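A shape-level sketch of the contrast described above, with radial/angular details stripped away: per-element densities and their tensor products grow with the number of chemical elements, while a learned fixed-dimensional element embedding keeps the feature size constant. This is a schematic of the idea only, not the paper's exact construction.

```python
# Compare the size of per-element neighbour densities with a fixed-dimensional
# learned element embedding; the shapes are the point of the example.
import numpy as np

rng = np.random.default_rng(0)
N_NEIGHBOURS, N_RADIAL = 20, 8
N_ELEMENTS, K_EMBED = 10, 4          # 10 chemical species, 4-dim embedding

species = rng.integers(0, N_ELEMENTS, size=N_NEIGHBOURS)   # neighbour elements
radial = rng.random((N_NEIGHBOURS, N_RADIAL))              # radial basis values

# (1) One-hot per-element densities: feature size scales with N_ELEMENTS,
#     and pairwise tensor products scale with N_ELEMENTS**2.
one_hot = np.eye(N_ELEMENTS)[species]                      # (N, N_ELEMENTS)
density_per_element = one_hot.T @ radial                   # (N_ELEMENTS, N_RADIAL)
pair_features = np.einsum("an,bm->abnm", density_per_element, density_per_element)
print("per-element:", density_per_element.shape, "pairs:", pair_features.shape)

# (2) Learned fixed-dimensional element embedding: feature size is set by
#     K_EMBED and is independent of how many elements the dataset contains.
W_embed = rng.normal(size=(N_ELEMENTS, K_EMBED))           # learnable in practice
embedded = W_embed[species]                                # (N, K_EMBED)
density_reduced = embedded.T @ radial                      # (K_EMBED, N_RADIAL)
pair_reduced = np.einsum("kn,km->knm", density_reduced, density_reduced)
print("reduced:", density_reduced.shape, "pairs:", pair_reduced.shape)
```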
Tail averaging improves on the non-asymptotic behaviour of Polyak averaging by excluding a number of leading iterates of stochastic optimization from its calculations. In practice, with a finite number of optimization steps and a learning rate that cannot be annealed to zero, tail averaging can get much closer to a local minimum point of the training loss than either the individual iterates or the Polyak average. However, the number of leading iterates to ignore is an important hyperparameter, and starting averaging too early or too late leads to inefficient use of resources or suboptimal solutions. Setting this hyperparameter to improve generalization is even harder, especially in the presence of other hyperparameters and overfitting. Furthermore, before averaging starts, the loss is only weakly informative of the final performance, which makes early stopping unreliable. To alleviate these problems, we propose an anytime variant of tail averaging that has no hyperparameters and approximates the optimal tail at all optimization steps. Our algorithm is based on two running averages with adaptive lengths bounded in terms of the optimal tail length, one of which achieves approximate optimality with some regularity. Requiring only the additional storage for two sets of weights and periodic evaluation of the loss, the proposed two-tailed averaging algorithm is a practical and widely applicable method for improving stochastic optimization.
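The abstract does not spell out the algorithm, so the following is only a plausible schematic of the stated idea: keep two running weight averages with different start times, periodically evaluate both, and restart the worse one so the effective tail length adapts without a tuned start-of-averaging hyperparameter. The names model_weights and validation_loss_fn in the usage comments are hypothetical.

```python
# Schematic of maintaining two running weight averages with adaptive lengths
# (not the authors' exact algorithm). Parameters are lists of numpy arrays,
# updated in place via an incremental mean.
import copy

class TwoRunningAverages:
    def __init__(self, params):
        self.avg = [copy.deepcopy(params), copy.deepcopy(params)]
        self.count = [1, 1]

    def update(self, params):
        """Fold the current iterate into both running averages."""
        for i in range(2):
            self.count[i] += 1
            for a, p in zip(self.avg[i], params):
                a += (p - a) / self.count[i]      # incremental mean, in place

    def restart_worse(self, eval_loss):
        """Evaluate both averages; restart the worse one so it begins a fresh
        (shorter) tail, while the better one keeps growing. Returns the better
        average and its loss."""
        losses = [eval_loss(self.avg[0]), eval_loss(self.avg[1])]
        worse = 0 if losses[0] > losses[1] else 1
        self.count[worse] = 0   # next update() overwrites it with the current iterate
        return self.avg[1 - worse], losses[1 - worse]

# Usage sketch (hypothetical names):
#   ta = TwoRunningAverages([w.copy() for w in model_weights])
#   every step:     ta.update(model_weights)
#   periodically:   best_avg, val_loss = ta.restart_worse(validation_loss_fn)
```

The extra cost matches what the abstract claims for the real algorithm: storage for two sets of weights plus an occasional loss evaluation.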
We study the problem of finding the root vertex in large growing networks. We prove that, in various models of random networks, it is possible to construct confidence sets that contain the root vertex with high probability and whose size does not depend on the number of vertices in the network. These models include uniform random recursive DAGs and uniform Cooper-Frieze random graphs.
Creating fast and accurate force fields is a long-standing challenge in computational chemistry and materials science. Recently, several message passing neural networks (MPNNs) have been shown to surpass models built with other approaches in terms of accuracy. However, most MPNNs suffer from high computational cost and poor scalability. We propose that these limitations arise because MPNNs only pass two-body messages, leading to a direct relationship between the number of layers and the expressivity of the network. In this work we introduce MACE, a new MPNN model that uses higher body-order messages. In particular, we show that using four-body messages reduces the required number of message passing iterations to just two, resulting in a fast and highly parallelizable model that reaches or exceeds state-of-the-art accuracy on the rMD17, 3BPA, and AcAc benchmark tasks. We also demonstrate that using higher-order messages leads to steeper learning curves.
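To illustrate what "body order" buys within a single layer, here is an invariant-only toy sketch (it ignores equivariance and the actual MACE architecture): a two-body message is linear in the pooled one-particle basis, while products of that pooled basis yield three- and four-body features without additional message passing iterations.

```python
# Toy body-order construction from a pooled one-particle basis (schematic only).
import numpy as np

rng = np.random.default_rng(0)
N_NEIGHBOURS, K = 12, 6
phi = rng.random((N_NEIGHBOURS, K))        # one-particle basis values per neighbour

# Two-body message: linear in the pooled basis A_k = sum_j phi_k(r_ij),
# so each term involves the central atom and a single neighbour.
A = phi.sum(axis=0)                        # shape (K,)
msg_2body = A

# Higher body-order messages: symmetric products of A couple two, three, ...
# neighbours at once (3-body, 4-body features) from a single pooling step.
msg_3body = np.einsum("k,l->kl", A, A).reshape(-1)        # products A_k * A_l
msg_4body = np.einsum("k,l,m->klm", A, A, A).reshape(-1)  # products A_k * A_l * A_m

print(msg_2body.shape, msg_3body.shape, msg_4body.shape)
```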
The SIGMORPHON 2022 shared task on morpheme segmentation challenged systems to decompose a word into a sequence of morphemes and covered most types of morphology: compounds, derivations, and inflections. Subtask 1, word-level morpheme segmentation, covered 5 million words in 9 languages (Czech, English, Spanish, Hungarian, French, Italian, Russian, Latin, Mongolian) and received 13 system submissions from 7 teams, with the best system averaging a 97.29% F1 score across all languages, ranging from English (93.84%) to Latin (99.38%). Subtask 2, sentence-level morpheme segmentation, covered 18,735 sentences in 3 languages (Czech, English, Mongolian) and received 10 system submissions from 3 teams, with the best system outperforming all three state-of-the-art subword tokenization methods (BPE, ULM, Morfessor2) by an absolute 30.71%. To facilitate error analysis and support future research of any kind, we release all system predictions, the evaluation script, and all gold-standard datasets.
The Universal Morphology (UniMorph) project is a collaborative effort to instantiate broad-coverage, normalized morphological inflection tables for hundreds of the world's languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation, and a type-level resource of annotated data in diverse languages realizing that schema. This paper presents the expansions and improvements made on several fronts over the last couple of years (since McCarthy et al. (2020)). Collaborative efforts by numerous linguists have added 67 new languages, including 30 endangered languages. We have implemented several improvements to the extraction pipeline to tackle issues such as missing gender and macron information. We have also amended the schema to use the hierarchical structure needed for morphological phenomena such as multiple-argument agreement and case stacking, while adding some missing morphological features to make the schema more inclusive. In light of the last UniMorph release, we also augmented the database with morpheme segmentation for 16 languages. Lastly, this new release pushes towards the inclusion of derivational morphology in UniMorph by enriching the data and annotation schema with instances representing derivational processes from MorphyNet.
Since the celebrated works of Russo and Zou (2016, 2019) and Xu and Raginsky (2017), it has been well known that the generalization error of supervised learning algorithms can be bounded in terms of the mutual information between their input and output, given that the loss of any fixed hypothesis has a subgaussian tail. In this work, we generalize this result beyond the standard choice of Shannon's mutual information as the measure of dependence between input and output. Our main result shows that it is indeed possible to replace the mutual information by any strongly convex function of the joint input-output distribution, with the subgaussianity condition on the losses replaced by a bound on an appropriately chosen norm capturing the geometry of the dependence measure. This allows us to derive a range of generalization bounds that are either entirely new or strengthen previously known ones. Examples include bounds stated in terms of $p$-norm divergences and the Wasserstein-2 distance, which are applicable to heavy-tailed loss distributions and highly smooth loss functions, respectively. Our analysis is entirely based on elementary tools from convex analysis, by tracking the growth of a potential function associated with the dependence measure and the loss function.
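For context, the classical bound being generalized and the decoupling inequality behind it are sketched below; the closing comment only paraphrases the abstract, and the precise generalized statement is not reproduced here.

```latex
% Classical mutual-information bound (Xu & Raginsky, 2017). Notation: W is the
% algorithm output, S = (Z_1, ..., Z_n) the training sample, gen(W, S) the gap
% between population and empirical risk, and each loss \ell(w, Z) is assumed
% \sigma-subgaussian.
\[
  \bigl| \mathbb{E}\,\mathrm{gen}(W, S) \bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)} .
\]
% The standard proof uses the Donsker--Varadhan (Legendre-duality) decoupling
% inequality: for any measurable f and any \lambda > 0,
\[
  \mathbb{E}_{P_{W,S}}\!\bigl[ f(W, S) \bigr]
  \;\le\; \frac{1}{\lambda}\Bigl( I(W; S)
    + \log \mathbb{E}_{P_W \otimes P_S}\!\bigl[ e^{\lambda f(W, S)} \bigr] \Bigr),
\]
% where I(W; S) = D_{\mathrm{KL}}(P_{W,S} \,\|\, P_W \otimes P_S). Per the
% abstract, the paper replaces this KL term by other strongly convex functions
% of the joint distribution, with the subgaussian moment-generating-function
% control replaced by a bound in a norm matched to that convex function.
```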